
    Augmented Reality Navigation Interfaces Improve Human Performance In End-Effector Controlled Telerobotics

    On the International Space Station (ISS) and space shuttles, the National Aeronautics and Space Administration (NASA) has used robotic manipulators extensively to perform payload handling and maintenance tasks. Teleoperating robots requires expert skill, and optimal performance is crucial to mission completion and crew safety. Degradation in performance is observed when manual control is mediated through remote camera views, resulting in poor end-effector navigation quality and extended task completion times. This thesis explores the application of three-dimensional augmented reality (AR) interfaces specifically designed to improve human performance during end-effector controlled teleoperations. A modular telerobotic test bed was developed for this purpose and several experiments were conducted. In the first experiment, the effect of camera placement on end-effector manipulation performance was evaluated. Results show that increasing misalignment between the displayed end-effector and hand-controller axes (display-control misalignments) increases the time required to process a movement input. Simple AR movement cues were found to mitigate the adverse effects of camera-based teleoperation and made performance invariant to misalignment. Applying these movement cues to payload transport tasks correspondingly demonstrated improvements in free-space navigation quality over conventional end-effector control using multiple cameras. Collision-free teleoperation is also a critical requirement in space. To help operators guide robots safely, a novel method was evaluated. Navigation plans computed by a planning agent are presented to the operator sequentially through an AR interface. The plans, in combination with the interface, allow the operator to guide the end-effector safely through collision-free regions in the remote environment. Experimental results show significant benefits in control performance, including reduced path deviation and travel distance.
Overall, the results show that AR interfaces can improve performance during manual control of remote robots and have tremendous potential in current and future teleoperated space robotic systems, as well as in contemporary military and surgical applications.
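The display-control misalignment compensation described in this abstract can be illustrated with a minimal sketch: the joystick command is rotated into the frame shown on camera, so that a stick deflection always moves the end-effector in the direction seen on screen. The function name and the 2-D, yaw-only simplification are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def compensate_misalignment(joystick_xy, camera_yaw_deg):
    """Rotate a 2-D joystick command into the end-effector frame shown
    on screen, so that 'push right' on the stick moves the end-effector
    right in the camera view regardless of camera placement.

    joystick_xy    : (x, y) deflection in the hand-controller frame
    camera_yaw_deg : yaw offset between the camera view and controller axes
    """
    theta = np.deg2rad(camera_yaw_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return rot @ np.asarray(joystick_xy, dtype=float)

# With a 90-degree display-control misalignment, a rightward stick
# deflection (1, 0) is reinterpreted as a forward command.
cmd = compensate_misalignment((1.0, 0.0), 90.0)
print(np.round(cmd, 6))  # [0. 1.]
```

A full 6-DOF implementation would use a rotation matrix or quaternion for the camera pose rather than a single yaw angle; the principle of re-expressing operator input in the displayed frame is the same.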

    Robot Assistance in Dynamic Smart Environments—A Hierarchical Continual Planning in the Now Framework

    By coupling a robot to a smart environment, the robot can sense state beyond the perception range of its onboard sensors and gain greater actuation capabilities. Nevertheless, incorporating the states and actions of Internet of Things (IoT) devices into the robot’s onboard planner increases the computational load, and thus can delay the execution of a task. Moreover, tasks may be frequently replanned due to the unanticipated actions of humans. Our framework aims to mitigate these shortcomings. In this paper, we propose a continual planning framework which incorporates the sensing and actuation capabilities of IoT devices into a robot’s state estimation, task planning and task execution. The robot’s onboard task planner queries a cloud-based framework for actuators capable of the actions the robot cannot execute. Once generated, the plan is sent to the cloud back-end, which will inform the robot if any IoT device reports a state change affecting its plan. Moreover, a Hierarchical Continual Planning in the Now approach was developed in which tasks are split up into subtasks. To delay the planning of actions that will not be promptly executed, and thus to reduce the frequency of replanning, the first subtask is planned and executed before the subsequent subtask is. Only information relevant to the current (sub)task is provided to the task planner. We apply our framework to a smart home and office scenario in which the robot is tasked with carrying out a human’s requests. A prototype implementation in a smart home, and simulator-based evaluation results, are presented to demonstrate the effectiveness of our framework.
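The "planning in the now" strategy summarized above (plan one subtask at a time, execute it, and replan only when a relevant state change is reported) can be sketched as a minimal control loop. The function names and toy planner below are illustrative assumptions, not the paper's implementation.

```python
def plan_in_the_now(subtasks, plan, execute, state_changed):
    """Plan and execute one subtask at a time: later subtasks are not
    planned until the current one completes, and the current subtask is
    replanned only when a relevant state change is reported mid-execution.
    Assumes plan() plans from the current world state, so replanning
    eventually converges rather than looping forever."""
    for subtask in subtasks:
        actions = list(plan(subtask))          # plan only the current subtask
        while actions:
            execute(actions.pop(0))
            if state_changed(subtask):         # e.g. cloud back-end flags an IoT event
                actions = list(plan(subtask))  # replan just this subtask

# Toy demo: two subtasks, no unexpected state changes.
log = []
plan_in_the_now(
    subtasks=["fetch_cup", "deliver_cup"],
    plan=lambda st: [f"{st}:step{i}" for i in (1, 2)],
    execute=log.append,
    state_changed=lambda st: False,
)
print(log)  # ['fetch_cup:step1', 'fetch_cup:step2', 'deliver_cup:step1', 'deliver_cup:step2']
```

Because each call to plan() sees only the current subtask, the planner's search space stays small even when many IoT devices are available.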

    User-Centered Design

    The successful introduction and acceptance of novel technological tools are only possible if end users are completely integrated into the design process. However, obtaining such integration of end users is not straightforward, as end-user organizations often do not consider research toward new technological aids as their core business and are therefore reluctant to engage in these kinds of activities. This chapter explains how this problem was tackled in the ICARUS project, by carefully identifying and approaching the targeted user communities and by compiling user requirements. From these user requirements, system requirements and a system architecture for the ICARUS system were deduced. An important aspect of the user-centered design approach is that it is an iterative methodology, based on multiple intermediate operational validations by end users of the developed tools, leading to a final validation according to user-scripted validation scenarios.

    Command and Control Systems for Search and Rescue Robots

    The novel application of unmanned systems in the domain of humanitarian Search and Rescue (SAR) operations has created a need to develop specific multi-Robot Command and Control (RC2) systems. This societal application of robotics requires human-robot interfaces for controlling a large fleet of heterogeneous robots deployed in multiple domains of operation (ground, aerial and marine). This chapter provides an overview of the Command, Control and Intelligence (C2I) system developed within the scope of Integrated Components for Assisted Rescue and Unmanned Search operations (ICARUS). The life cycle of the system begins with a description of use cases and deployment scenarios developed in collaboration with SAR teams as end users. This is followed by an illustration of the system design and architecture, the core technologies used in implementing the C2I, and the iterative integration phases with field deployments for evaluating and improving the system. The main subcomponents consist of a central Mission Planning and Coordination System (MPCS), field Robot Command and Control (RC2) subsystems with a portable force-feedback exoskeleton interface for robot arm tele-manipulation, and field mobile devices. The distribution of these C2I subsystems and their communication links for unmanned SAR operations is described in detail. Field demonstrations of the C2I system with SAR personnel assisted by unmanned systems provide an outlook for integrating such systems into mainstream SAR operations in the future.

    An artificial intelligence-based collaboration approach in industrial IoT manufacturing: key concepts, architectural extensions and potential applications

    The digitization of the manufacturing industry has led to leaner and more efficient production under the Industry 4.0 concept. Nowadays, datasets collected from shop floor assets and information technology (IT) systems are used in data-driven analytics efforts to support more informed business intelligence decisions. However, these results are currently only used in isolated and dispersed parts of the production process. At the same time, full integration of artificial intelligence (AI) in all parts of manufacturing systems is currently lacking. In this context, the goal of this manuscript is to present a more holistic integration of AI by promoting collaboration. To this end, collaboration is understood as a multi-dimensional conceptual term that covers all important enablers for AI adoption in manufacturing contexts and is promoted in terms of business intelligence optimization, human-in-the-loop approaches and secure federation across manufacturing sites. To address these challenges, the proposed architectural approach builds on three technical pillars: (1) components that extend the functionality of the existing layers in the Reference Architectural Model for Industry 4.0; (2) definition of new layers for collaboration by means of human-in-the-loop and federation; (3) security concerns addressed with AI-powered mechanisms. In addition, system implementation aspects are discussed, and potential applications in industrial environments, as well as business impacts, are presented.

    Architecture for incorporating Internet-of-Things sensors and actuators into robot task planning in dynamic environments

    Robots are being deployed in a wide range of smart environments that are equipped with sensors and actuators. These devices can provide valuable information beyond the perception range of a robot's onboard sensors, or provide additional actuators that complement the robot's actuation abilities. Traditional robot task planners do not take these additional sensing and actuation abilities into account. This paper introduces an enhanced robotic planning framework that improves robots' ability to operate in dynamically changing environments. To keep planning time short, the amount of knowledge in the planner's world model is minimized.
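The world-model minimization mentioned above can be sketched as a relevance filter over facts: only facts mentioning objects relevant to the current goal are handed to the planner. The (predicate, *args) fact format and the function name below are illustrative assumptions, not the paper's representation.

```python
def relevant_world_model(all_facts, goal_objects):
    """Filter the world model down to facts that mention objects relevant
    to the current goal, keeping the planner's search space small."""
    return [fact for fact in all_facts
            if any(obj in fact[1:] for obj in goal_objects)]

# Illustrative (predicate, *args) facts from a smart home.
facts = [
    ("at", "robot", "kitchen"),
    ("door_open", "kitchen_door"),
    ("light_on", "office_lamp"),   # unrelated to the kitchen task below
    ("holds", "robot", "cup"),
]
goal_objects = {"robot", "cup", "kitchen", "kitchen_door"}
print(relevant_world_model(facts, goal_objects))
# [('at', 'robot', 'kitchen'), ('door_open', 'kitchen_door'), ('holds', 'robot', 'cup')]
```

In a real system the relevance test would follow object relations transitively (e.g. rooms connected to the goal room) rather than simple membership, but the effect is the same: fewer facts, faster planning.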

    Joystick mapped Augmented Reality Cues for End-Effector controlled Tele-operated Robots

    End-effector control of robots using only remote camera views is difficult due to the lack of perceived correspondence between the joysticks and the end-effector coordinate frame. This paper reports the positive effects of Augmented Reality (AR) visual cues on operator performance during end-effector controlled teleoperation using only camera views. Our solution is to overlay a color-coded coordinate system on the end-effector of the robot using AR techniques. This color-coded coordinate system is then mapped directly to similarly color-coded joysticks used for control of both position and orientation. The AR view, along with the mapped markings on the joysticks, gives the user a clear notion of the effect of their joystick movements on the end-effector of the robot. All camera views display this registered dynamic overlay information on demand. A preliminary test with fifteen subjects compared control performance with and without the coordinate mapping, using a simple insertion task. Preliminary results indicate a significant reduction in distance traveled, reversal errors and mental workload.
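The color-coded joystick-to-end-effector mapping can be sketched as a direct lookup: each colored joystick axis drives the identically colored axis in the AR overlay. The axis names and color assignments below are illustrative assumptions, not the paper's exact scheme.

```python
# Hypothetical color coding of the overlaid end-effector axes.
AXIS_COLORS = {"x": "red", "y": "green", "z": "blue"}

def end_effector_command(joystick_deflection):
    """Translate per-color joystick deflections into commands along the
    matching color-coded end-effector axes shown in the AR overlay."""
    color_to_axis = {color: axis for axis, color in AXIS_COLORS.items()}
    return {color_to_axis[color]: value
            for color, value in joystick_deflection.items()}

print(end_effector_command({"red": 0.5, "blue": -0.2}))
# {'x': 0.5, 'z': -0.2}
```

The point of the mapping is that the operator never has to mentally rotate between frames: the color seen on the overlay is the color touched on the joystick.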